Recently, discovering interpretable directions in the latent spaces of pre-trained GANs has become a popular topic. While existing works mostly consider directions for semantic image manipulation, we focus on an abstract property: creativity. Can we manipulate an image to be more or less creative? We build our work on the largest AI-based creativity platform, Artbreeder, where users can generate images using pre-trained GAN models. We explore the latent dimensions of images generated on this platform and present a novel framework for manipulating images to make them more creative. Our code and dataset are available at http://github.com/catlab-team/latentcreative.
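For illustration, here is a minimal sketch of the generic latent-direction edit this line of work builds on: a latent code is shifted along a direction vector with a strength parameter, then decoded by the generator. The names `edit_latent`, `generator`, and `creativity_direction` are hypothetical; the abstract does not specify the framework's actual form.

```python
import numpy as np

def edit_latent(z, direction, alpha):
    """Shift latent code z along a unit-normalized direction by strength alpha."""
    d = direction / np.linalg.norm(direction)  # normalize the edit direction
    return z + alpha * d

# Hypothetical usage with a pre-trained GAN generator:
# z = np.random.randn(512)                          # sampled latent code
# z_edit = edit_latent(z, creativity_direction, 2.0)
# img = generator(z_edit)                           # decode the edited code
```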
In this paper, we introduce Normalized Stochastic Gradient Descent (NSGD), a novel optimization algorithm for training machine learning models, inspired by Normalized Least Mean Squares (NLMS) from adaptive filtering. When training a high-complexity model on a large dataset, the choice of learning rate is critical, as a poor choice of optimizer parameters can lead to divergence. The algorithm updates the network weights using the stochastic gradient, but with $\ell_1$- and $\ell_2$-based normalizations of the learning rate parameter, similar to the NLMS algorithm. The main difference from existing normalization methods is that we do not include the error term in the normalization; instead, we normalize the update term using the input vector to the neuron. Our experiments show that models can be trained to higher accuracy across different initial settings using our optimizer. We demonstrate the efficiency of the training algorithm using ResNet-20 and a toy neural network on different benchmark datasets with different initializations. NSGD improves the accuracy of ResNet-20 on the CIFAR-10 dataset from 91.96\% to 92.20\%.
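A minimal sketch of the update rule described above, assuming an NLMS-style scaling in which the raw learning rate is divided by a norm of the neuron's input vector; the function name, the `eps` safeguard, and the exact normalization form are assumptions, not taken from the paper.

```python
import numpy as np

def nsgd_update(w, x, grad, lr=0.01, eps=1e-8, norm="l2"):
    """One NSGD-style step for a single neuron (sketch).

    w    : weight vector of the neuron
    x    : input vector to the neuron (used for normalization)
    grad : stochastic gradient of the loss w.r.t. w

    As in NLMS, the learning rate is scaled by a norm of the input;
    unlike NLMS, the error term is not part of the normalization.
    """
    if norm == "l2":
        denom = eps + np.dot(x, x)       # ell_2-based: ||x||_2^2
    else:
        denom = eps + np.abs(x).sum()    # ell_1-based: ||x||_1
    return w - (lr / denom) * grad
```

The `eps` term guards against division by zero for near-zero inputs, mirroring the standard safeguard in NLMS implementations.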